39 research outputs found

    Transfer Learning with Deep Convolutional Neural Network (CNN) for Pneumonia Detection using Chest X-ray

    Pneumonia is a life-threatening disease of the lungs caused by either bacterial or viral infection. It can be fatal if not treated in time, so early diagnosis is vital. The aim of this paper is to automatically detect bacterial and viral pneumonia from digital chest X-ray images. It provides a detailed report on advances made toward accurate detection of pneumonia and then presents the methodology adopted by the authors. Four different pre-trained deep Convolutional Neural Networks (CNNs), AlexNet, ResNet18, DenseNet201, and SqueezeNet, were used for transfer learning. A total of 5247 bacterial, viral, and normal chest X-ray images underwent preprocessing, and the modified images were used to train the transfer-learning-based classifiers. The authors report three classification schemes: normal vs. pneumonia, bacterial vs. viral pneumonia, and normal vs. bacterial vs. viral pneumonia. The classification accuracies for these three schemes were 98%, 95%, and 93.3%, respectively, each exceeding the accuracies reported in the literature. The proposed study can therefore help radiologists diagnose pneumonia faster and can support rapid screening of pneumonia patients, for example at airports.

    Comment: 13 figures, 5 tables. arXiv admin note: text overlap with arXiv:2003.1314
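    The transfer-learning setup described above, reusing a pre-trained CNN as a fixed feature extractor and training only a new classification head, can be sketched as follows. This is a minimal illustration, not the authors' code: the `frozen_features` function is a hypothetical stand-in for a frozen AlexNet/ResNet18-style backbone, and the arrays are random placeholders for chest X-ray data.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_features(images):
    # Hypothetical stand-in for a frozen pre-trained backbone (AlexNet,
    # ResNet18, DenseNet201, or SqueezeNet in the paper): a fixed random
    # projection plus ReLU, row-normalized to keep logits well scaled.
    W = np.random.default_rng(42).normal(size=(images.shape[1], 64))
    F = np.maximum(images @ W, 0.0)
    return F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-9)

# Toy stand-ins for "normal vs. pneumonia" images and labels.
X = rng.normal(size=(200, 256))
y = (rng.random(200) > 0.5).astype(int)

F = frozen_features(X)           # backbone weights are never updated
head = np.zeros(F.shape[1])      # only this new head is trained

# Logistic-regression gradient steps on the head alone: this
# "train the last layer only" loop is the core of transfer learning.
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-F @ head))
    head -= 0.01 * F.T @ (p - y) / len(y)

train_acc = ((F @ head > 0).astype(int) == y).mean()
```

    In practice one can also unfreeze and fine-tune some backbone layers at a low learning rate; the fully frozen extractor shown here is simply the most basic variant of the technique.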

    An Intelligent and Low-cost Eye-tracking System for Motorized Wheelchair Control

    Across the 34 developed and 156 developing countries, about 132 million disabled people need a wheelchair, constituting 1.86% of the world population. Moreover, millions of people suffer from diseases related to motor disabilities that cause an inability to produce controlled movement in any of the limbs or even the head. This paper proposes a system to aid people with motor disabilities by restoring their ability to move effectively and effortlessly, without having to rely on others, using an eye-controlled electric wheelchair. The system input was images of the user's eye, which were processed to estimate the gaze direction, and the wheelchair was moved accordingly. To accomplish this, four user-specific methods were developed, implemented, and tested, all based on a benchmark database created by the authors. The first three techniques were automatic, employed correlation, and were variants of template matching, while the last one used convolutional neural networks (CNNs). Different metrics quantitatively evaluating each algorithm's accuracy and latency were computed, and an overall comparison is presented. The CNN exhibited the best performance (99.3% classification accuracy) and was therefore chosen as the gaze estimator commanding the wheelchair's motion. The system was evaluated carefully on 8 subjects, achieving 99% accuracy under changing illumination conditions both outdoors and indoors. This required modifying a motorized wheelchair to adapt it to the predictions output by the gaze estimation algorithm. The wheelchair controller can bypass any decision made by the gaze estimator and immediately halt motion, with the help of an array of proximity sensors, if the measured distance falls below a well-defined safety margin.

    Comment: Accepted for publication in Sensors. 19 figures, 3 tables.
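    The template-matching variants mentioned above rest on correlating the current eye image against stored per-direction templates and picking the best match. A minimal sketch of that idea using normalized cross-correlation, with random placeholder templates rather than the authors' benchmark database:

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equal-size image patches.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def classify_gaze(eye_img, templates):
    # Correlate the eye image against one stored template per gaze
    # direction and return the best-matching direction.
    scores = {d: ncc(eye_img, t) for d, t in templates.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
templates = {d: rng.normal(size=(16, 16)) for d in ("left", "right", "up", "down")}

# A mildly noisy view of the "left" template should still match "left".
query = templates["left"] + 0.1 * rng.normal(size=(16, 16))
direction = classify_gaze(query, templates)
```

    In a real system, the per-direction templates would be cropped from per-user calibration images; the paper's CNN variant replaces this hand-crafted matcher with learned features, which is what yielded the 99.3% accuracy.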

    A Lightweight Deep Learning Based Microwave Brain Image Network Model for Brain Tumor Classification Using Reconstructed Microwave Brain (RMB) Images

    Computerized brain tumor classification from reconstructed microwave brain (RMB) images is important for the examination and observation of brain disease development. In this paper, an eight-layered lightweight classifier called the microwave brain image network (MBINet), built with a self-organized operational neural network (Self-ONN), is proposed to classify RMB images into six classes. Initially, an experimental antenna sensor-based microwave brain imaging (SMBI) system was implemented, and RMB images were collected to create an image dataset. It consists of 1320 images in total: 300 non-tumor images, 215 images each for the single malignant and single benign tumor classes, 200 images each for the double benign and double malignant tumor classes, and 190 images for the combined single benign and single malignant tumor class. Image resizing and normalization were used for preprocessing, and augmentation techniques were then applied to produce 13,200 training images per fold for 5-fold cross-validation. The MBINet model achieved accuracy, precision, recall, F1-score, and specificity of 96.97%, 96.93%, 96.85%, 96.83%, and 97.95%, respectively, for six-class classification on original RMB images. Compared with four Self-ONNs, two vanilla CNNs, and the ResNet50, ResNet101, and DenseNet201 pre-trained models, MBINet showed better classification outcomes (almost 98%). The MBINet model can therefore be used to reliably classify tumor(s) from RMB images in the SMBI system.

    This work was supported by the Universiti Kebangsaan Malaysia project grant code DIP-2021-024, and by Grant NPRP12S-0227-190164 from the Qatar National Research Fund, a member of the Qatar Foundation, Doha, Qatar; the claims made herein are solely the responsibility of the authors. Open access publication is supported by the Qatar National Library.
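    The cross-validation pipeline described above hinges on one detail worth making explicit: augmentation is applied per fold to the training portion only, never to the held-out validation images, so augmented copies of a validation image cannot leak into training. A schematic sketch of that split, using the abstract's 1320-image dataset size but an illustrative ×10 augmentation ratio (the paper's own ratio and exact per-fold counts differ):

```python
import random

def five_fold_indices(n, seed=0):
    # Shuffle indices once, then slice them into 5 equal folds.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    size = n // 5
    return [idx[i * size:(i + 1) * size] for i in range(5)]

def augment(sample):
    # Stand-in for image augmentation (rotations, flips, etc.);
    # each original yields itself plus 9 perturbed copies here.
    return [sample] + [f"{sample}+aug{k}" for k in range(9)]

n = 1320                                  # dataset size from the abstract
samples = [f"img{i}" for i in range(n)]
folds = five_fold_indices(n)

fold_sizes = []
for k in range(5):
    val_idx = set(folds[k])
    train = [samples[i] for i in range(n) if i not in val_idx]
    # Augment the training images only; the validation fold stays
    # untouched to avoid train/validation leakage.
    train_aug = [a for s in train for a in augment(s)]
    fold_sizes.append(len(train_aug))
```

    With 1320 images, each fold holds out 264 images and augments the remaining 1056 for training; scaling the augmentation factor is what produces per-fold training sets of the size the paper reports.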

    Deep Learning Technique for Congenital Heart Disease Detection Using Stacking-Based CNN-LSTM Models from Fetal Echocardiogram: A Pilot Study

    Congenital heart defects (CHDs) are a leading cause of death in infants under 1 year of age. Prenatal intervention can reduce the risk for patients with serious postnatal CHD, but current diagnosis is based on qualitative criteria, which can lead to variability between clinicians. Objective: to detect morphological and temporal changes in cardiac ultrasound (US) videos of fetuses with hypoplastic left heart syndrome (HLHS) using deep learning models. A small cohort of 9 healthy and 13 HLHS patients was enrolled, and ultrasound videos were collected at three gestational time points. The videos were preprocessed and segmented into cardiac-cycle videos, and five deep learning CNN-LSTM models were trained (MobileNetv2, ResNet18, ResNet50, DenseNet121, and GoogleNet). The three top-performing models were used to develop a novel stacking CNN-LSTM model, trained with five-fold cross-validation to classify HLHS and healthy patients. The stacking CNN-LSTM model outperformed the individual pre-trained CNN-LSTM models, with accuracy, precision, sensitivity, F1 score, and specificity of 90.5%, 92.5%, 92.5%, 92.5%, and 85%, respectively, for both video-wise and subject-wise classification. This study demonstrates the potential of deep learning models to classify prenatal CHD patients from ultrasound videos, which can aid objective assessment of the disease in a clinical setting.

    This study was funded by the Qatar National Research Fund (QNRF), National Priorities Research Program (NPRP 10-0123-170222). The open access publication of this article was funded by the Qatar National Library.
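    The stacking step described above, feeding the outputs of the top base models into a second-level learner, can be sketched generically as follows. This is an illustration with toy data: simple logistic-regression "base models" on different feature subsets stand in for the CNN-LSTM backbones, and for brevity the base models are trained on the full training set, whereas careful stacking (as in the study's five-fold scheme) would use out-of-fold predictions as meta-features to avoid leakage.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary data standing in for per-video classification features.
X = rng.normal(size=(100, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(int)

def fit_logistic(F, t, steps=300, lr=0.1):
    # Minimal logistic-regression trainer, used for base and meta models.
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-F @ w))
        w -= lr * F.T @ (p - t) / len(t)
    return w

# Three "base models", each seeing a different slice of the features,
# standing in for the three top-performing CNN-LSTM backbones.
subsets = [slice(0, 3), slice(3, 6), slice(5, 8)]
base_ws = [fit_logistic(X[:, s], y) for s in subsets]

# Stacking: the base models' probability outputs become the input
# features of a second-level (meta) learner.
meta_X = np.column_stack(
    [1.0 / (1.0 + np.exp(-X[:, s] @ w)) for s, w in zip(subsets, base_ws)]
)
meta_w = fit_logistic(meta_X, y)
stack_preds = (meta_X @ meta_w > 0).astype(int)
stack_acc = (stack_preds == y).mean()
```

    The design rationale is that base models make different errors, and the meta-learner can weight their (dis)agreements, which is why the stacked model can outperform each individual CNN-LSTM.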